Abstract
Intelligent systems based on machine learning classification algorithms are increasingly common in everyday life. However, many of these systems rely on black-box models whose predictions cannot be self-explained, which confronts researchers and society with a central question: how can one trust the prediction of a model one cannot understand? Explainable AI (XAI) has emerged as a field of AI that aims to develop techniques capable of explaining a classifier's decisions to the end user. Among the resulting techniques is Explanation-by-Example, which as yet has few initiatives consolidated by the XAI community. This research explores Item Response Theory (IRT) as a tool for explaining models and for measuring the reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, with a Random Forest model as the hypothesis under test. On the test set, 83.8% of the model's errors came from instances that IRT flags as unreliable for the model.
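For context, IRT models the probability that a respondent (here, a classifier) answers an item (here, a test instance) correctly as a function of the respondent's ability and the item's parameters. The sketch below illustrates the idea with the three-parameter logistic (3PL) model commonly used in IRT analyses of classifiers; the function name, parameter values, and reliability cutoff are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """3PL item characteristic curve: probability that a respondent with
    ability `theta` answers an item with discrimination `a`, difficulty `b`,
    and guessing parameter `c` correctly."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values only: in practice the item parameters are fitted from
# the responses of a pool of classifiers, and `theta` is the ability
# estimated for the classifier under evaluation (e.g., a Random Forest).
theta = 0.4              # estimated ability of the classifier
a, b, c = 1.2, 1.5, 0.1  # fitted parameters of one test instance

p = p_correct_3pl(theta, a, b, c)
reliable = p >= 0.5      # hypothetical reliability cutoff
print(f"P(correct) = {p:.3f} -> {'reliable' if reliable else 'unreliable'}")
```

Under this reading, an instance whose predicted probability of a correct response is low is one on which the model should be considered unreliable, which matches the kind of flag the 83.8% figure above refers to.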
Notes
- 1. Model-Agnostic: it does not depend on the type of model to be explained [18].
- 2. All results can be accessed at: https://github.com/LucasFerraroCardoso/IRT_XAI.
References
Abdi, H., Valentin, D.: Multiple correspondence analysis. Encycl. Meas. Stat. 2(4), 651–657 (2007)
Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
Baker, F.B.: The basics of item response theory (2001). http://ericae.net/irt/baker
Biggio, B., Roli, F.: Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recogn. 84, 317–331 (2018)
Cardoso, L.F.F., Santos, V.C.A., Francês, R.S.K., Prudêncio, R.B.C., Alves, R.C.O.: Decoding machine learning benchmarks. In: Cerri, R., Prati, R.C. (eds.) BRACIS 2020. LNCS (LNAI), vol. 12320, pp. 412–425. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61380-8_28
Chicco, D., Jurman, G.: The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics 21(1), 1–13 (2020)
Geirhos, R., et al.: Shortcut learning in deep neural networks. Nat. Mach. Intell. 2(11), 665–673 (2020)
Gilpin, L.H., et al.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE (2018)
Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
Guidotti, R., et al.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018)
Gunning, D., Aha, D.: DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019)
Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 20, 53–65 (1987)
Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! Criticism for interpretability. In: Advances in Neural Information Processing Systems, vol. 29 (2016)
Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: International Conference on Machine Learning. PMLR (2017)
Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23(1), 18 (2020)
Martínez-Plumed, F., et al.: Item response theory in AI: analysing machine learning classifiers at the instance level. Artif. Intell. 271, 18–42 (2019)
Molnar, C.: Interpretable machine learning. Lulu.com (2020)
Molnar, C., Casalicchio, G., Bischl, B.: Interpretable machine learning – a brief history, state-of-the-art and challenges. In: Koprinska, I., et al. (eds.) ECML PKDD 2020. CCIS, vol. 1323, pp. 417–431. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65965-3_28
Naiseh, M., et al.: Explainable recommendation: when design meets trust calibration. World Wide Web 24(5), 1857–1884 (2021)
Regulation, P.: General data protection regulation (GDPR). Intersoft Consulting (2018)
Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
Ribeiro, J., et al.: Does dataset complexity matters for model explainers? In: 2021 IEEE International Conference on Big Data (Big Data). IEEE (2021)
Sabatine, M.S., Cannon, C.P.: Approach to the patient with chest pain. In: Braunwald’s Heart Disease: A Textbook of Cardiovascular Medicine. 9th edn., pp. 1076–1086. Elsevier/Saunders, Philadelphia (2012)
Vanschoren, J., et al.: OpenML: networked science in machine learning. ACM SIGKDD Explor. Newsl. 15(2), 49–60 (2014)
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cardoso, L.F.F. et al. (2022). Explanation-by-Example Based on Item Response Theory. In: Xavier-Junior, J.C., Rios, R.A. (eds) Intelligent Systems. BRACIS 2022. Lecture Notes in Computer Science, vol 13653. Springer, Cham. https://doi.org/10.1007/978-3-031-21686-2_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21685-5
Online ISBN: 978-3-031-21686-2